Intel, AMD and Nvidia propose new standard to make AI processing more efficient

#artificialintelligence

In pursuit of faster and more efficient AI system development, Intel, AMD and Nvidia today published a draft specification for what they refer to as a common interchange format for AI. While voluntary, the proposed "8-bit floating point (FP8)" standard, they say, has the potential to accelerate AI development by optimizing hardware memory usage and work for both AI training (i.e., engineering AI systems) and inference (running the systems). When developing an AI system, data scientists are faced with key engineering choices beyond simply collecting data to train the system. One is selecting a format to represent the weights of the system -- weights being the factors learned from the training data that influence the system's predictions. Weights are what enable a system like GPT-3 to generate whole paragraphs from a sentence-long prompt, for example, or DALL-E 2 to create photorealistic portraits from a caption.
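To make the memory trade-off concrete: an FP8 value in the proposed E4M3 layout has 1 sign bit, 4 exponent bits, and 3 mantissa bits, so weights lose precision compared with 16- or 32-bit formats but take a quarter of the space of FP32. The sketch below is a simplified illustration of how rounding a value to an E4M3-style grid might behave; it is not the published specification (subnormals, NaN encoding, and exact rounding rules are omitted), and the function name is our own.

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest value on an E4M3-style grid:
    1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits.
    Simplified sketch: ignores subnormals and NaN encoding."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    # Exponent of the leading binary digit of |x|.
    e = math.floor(math.log2(mag))
    # Clamp to the normal exponent range for a bias-7 format: [-6, 8].
    e = max(-6, min(8, e))
    # With 3 mantissa bits, representable values in this binade are
    # spaced 2^(e-3) apart; round to that grid.
    step = 2.0 ** (e - 3)
    q = round(mag / step) * step
    # Saturate at the E4M3 maximum finite value, 448.
    q = min(q, 448.0)
    return sign * q
```

For example, `quantize_e4m3(0.1)` snaps to `0.1015625` (the nearest point on the 3-mantissa-bit grid), and large magnitudes saturate at 448, which illustrates why the format suits inference and mixed-precision training rather than general numerics.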


The demand for AI is helping Nvidia and AMD leapfrog Intel

#artificialintelligence

Intel is the king of a shrinking kingdom. Almost every traditional desktop or laptop PC runs on the Santa Clara company's processors, but that tradition is fast being eroded by more mobile, ARM-powered alternatives. Apple's most important personal computers now run iOS, Google's flagship Chromebook has an ARM flavor, and Microsoft just announced Windows for ARM. What's more, the burden of processing tasks is shifting away from the personal device and out to distributed networks of server farms up in the proverbial cloud, leaving Intel with a big portfolio of chips and no obvious customer to sell millions of them to. If you had to name the most influential chip company in history, Intel would be it.